
    Blind Image Restoration via the Integration of Stochastic and Deterministic Methods

    This paper addresses image restoration, which remains a significant problem in image processing. Fields-of-Experts- (FoE-) based image restoration is discussed, and open issues including noise estimation and parameter selection are examined. The stochastic FoE method performs fairly well; however, it can produce unsatisfactory results, especially when the noise is severe. To improve the final performance, we integrate it with the deterministic K-SVD method. The FoE-treated image is used to train the dictionary, and with the help of sparse and redundant representation over the trained dictionary, the K-SVD algorithm markedly improves the result even when the pretreated image is of poor quality under severe noise. Experimental results of the proposed method are demonstrated and compared in detail; evaluations from both qualitative and quantitative aspects show better performance than current state-of-the-art restoration algorithms.
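The sparse-coding step at the heart of K-SVD denoising can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a dictionary `D` with unit-norm columns and uses Orthogonal Matching Pursuit (OMP), the sparse-approximation routine K-SVD typically relies on, to code a signal over the trained dictionary:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: find a k-sparse code x with y ~= D @ x.

    D: dictionary with unit-norm columns (n_features x n_atoms)
    y: signal vector (n_features,)
    k: number of atoms to select
    """
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the signal on the selected atoms (least squares).
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

In full K-SVD denoising, every image patch is coded this way and the dictionary atoms are then updated one at a time via rank-1 SVD approximations of the coding residual.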

    A higher-order MRF based variational model for multiplicative noise reduction

    The Fields of Experts (FoE) image prior model, a filter-based higher-order Markov Random Field (MRF) model, has been shown to be effective for many image restoration problems. Motivated by the success of FoE-based approaches, in this letter we propose a novel variational model for multiplicative noise reduction based on the FoE image prior. The resulting model corresponds to a non-convex minimization problem, which can be solved by a recently published non-convex optimization algorithm. Experimental results on synthetic speckle noise and real synthetic aperture radar (SAR) images suggest that the performance of our proposed method is on par with the best published despeckling algorithms. Moreover, our model has the additional advantage that inference is extremely efficient: our GPU-based implementation takes less than 1 s to produce state-of-the-art despeckling results. Comment: 5 pages, 5 figures, to appear in IEEE Signal Processing Letters.
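The abstract does not spell out the energy. As a hedged sketch only, an FoE-regularized model for multiplicative (speckle) noise on an observation f = u·η, using the common Aubert–Aujol-style data term, would take the form

```latex
\min_{u > 0} \; \sum_{i=1}^{N_k} \sum_{p} \phi\!\big((k_i * u)_p\big)
  \;+\; \lambda \sum_{p} \Big( \log u_p + \frac{f_p}{u_p} \Big)
```

where the k_i are the learned FoE filters, φ is a non-convex penalty such as φ(z) = log(1 + z²), and λ balances prior and data terms; the non-convexity of both the filter penalties and the data term matches the abstract's description of the minimization problem.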

    An Improved Fuzzy Connected Image Segmentation Method Based on CUDA

    The existing parallel fuzzy connectedness algorithm CUDA-kFOE achieves fast segmentation of large medical image datasets, but it does not account for the race condition that arises when edge points of thread blocks are updated by multiple threads simultaneously, which introduces small errors into the result. Because medical image processing demands high precision, we propose an iterative correction method based on CUDA-kFOE to resolve these edge-point errors. We first analyze why CUDA-kFOE races at thread-block edges, then discuss all possible propagation paths of the affinity of edge voxels and the errors they can cause, and finally introduce a two-iteration correction scheme: all edge points produced by the first iteration are fed into a second, corrective iteration, which fixes the erroneous affinity values computed in the first pass. Segmentation experiments on liver vessels in three CT sequences of small, medium, and large size, each validated with three different seed points, show that the results of our algorithm agree with the sequential CPU implementation of fuzzy connected image segmentation, resolving the computational error of CUDA-kFOE. National Natural Science Foundation of China (61001144; 61102137; 61301010; 61327001).
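The paper's algorithm runs on 3-D CT volumes with CUDA thread blocks; purely as an illustration of the idea, the Python toy below emulates the failure and the fix on a 1-D "volume". Fuzzy connectivity (the max over paths of the minimum affinity along the path) is relaxed independently inside two blocks, which leaves stale values past the block border, and a second corrective pass re-relaxes from the border so the result matches the sequential reference:

```python
import numpy as np

def relax_block(conn, aff, lo, hi):
    """Relax fuzzy connectivity to a fixpoint for voxels lo..hi-1.

    conn[v] is the connectivity of voxel v to the seed; aff[i] is the
    affinity of the edge between voxels i and i+1.  Fuzzy connectivity
    is the max over paths of the min affinity along the path, so each
    update is conn[v] = max(conn[v], min(conn[u], aff(u, v))) over the
    neighbors u of v.
    """
    changed = True
    while changed:
        changed = False
        for v in range(lo, hi):
            best = conn[v]
            if v > 0:
                best = max(best, min(conn[v - 1], aff[v - 1]))
            if v + 1 < len(conn):
                best = max(best, min(conn[v + 1], aff[v]))
            if best > conn[v]:
                conn[v] = best
                changed = True

# 1-D "volume": 8 voxels, seed at voxel 0, random edge affinities.
rng = np.random.default_rng(1)
aff = rng.random(7)

# Sequential reference: relax the whole volume at once.
ref = np.zeros(8)
ref[0] = 1.0
relax_block(ref, aff, 0, 8)

# First (block-parallel) pass: block 1 runs against stale border
# values, so the seed's connectivity never crosses voxels 3/4.
conn = np.zeros(8)
conn[0] = 1.0
relax_block(conn, aff, 4, 8)   # block 1: reads conn[3] == 0 (stale)
relax_block(conn, aff, 0, 4)   # block 0: computed correctly
first_pass = conn.copy()

# Corrective second pass: re-relax block 1, now reading the correct
# border value conn[3]; the result matches the sequential reference.
relax_block(conn, aff, 4, 8)
```

The real algorithm does the same thing per thread block in 3-D: the first GPU pass records which border voxels changed, and the corrective pass re-propagates affinity only from those voxels.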

    Translation Speed Compensation in the Dorsal Aspect of the Medial Superior Temporal Area

    The dorsal aspect of the medial superior temporal area (MSTd) is involved in the computation of heading direction from the focus of expansion (FOE) of the visual image. Our laboratory previously found that MSTd neurons adjust their focus tuning curves to compensate for shifts in the FOE produced by eye rotation (Bradley et al., 1996) as well as for changes in pursuit speed (Shenoy et al., 2002). The translation speed of an observer also affects the shift of the FOE. To investigate whether MSTd neurons can adjust their focus tuning curves to compensate for varying translation speeds, we recorded extracellular responses from 93 focus-tuned MSTd neurons in two rhesus monkeys (Macaca mulatta) performing pursuit eye movements across displays of varying translation speeds. We found that MSTd neurons had larger shifts in their tuning curves for slow translation speeds and smaller shifts for fast translation speeds. These shifts aligned the focus tuning curves with the true heading direction and not with the retinal position of the FOE. Because the eye was pursuing at the same rate across translation speeds, these results indicate that MSTd neurons use both retinal cues related to translation speed and extraretinal signals from pursuit eye movements to compute heading direction.

    An Analog VLSI Chip for Estimating the Focus of Expansion

    For applications involving the control of moving vehicles, the recovery of relative motion between a camera and its environment is of high utility. This thesis describes the design and testing of a real-time analog VLSI chip which estimates the focus of expansion (FOE) from measured time-varying images. Our approach assumes a camera moving through a fixed world with translational velocity; the FOE is the projection of the translation vector onto the image plane. This is the point toward which the camera is moving and from which all other image points appear to expand. Through the camera imaging parameters, the location of the FOE gives the direction of 3-D translation. The algorithm we use for estimating the FOE minimizes the sum of squares of the differences, at every pixel, between the observed time variation of brightness and the predicted variation given the assumed position of the FOE. This minimization is not straightforward, because the relationship between the brightness derivatives depends on the unknown distance to the surface being imaged. However, image points where brightness is instantaneously constant play a critical role. Ideally, the FOE would lie at the intersection of the tangents to the iso-brightness contours at these "stationary" points. In practice, brightness derivatives are hard to estimate accurately because the image is quite noisy. Reliable results can nevertheless be obtained if the image contains many stationary points and one finds the point that minimizes the sum of squares of the perpendicular distances from the tangents at the stationary points. The FOE chip calculates the gradient of this least-squares minimization sum, and the estimation is performed by closing a feedback loop around it. The chip has been implemented using an embedded CCD imager for image acquisition and a row-parallel processing scheme. A 64 x 64 version was fabricated in a 2 um CCD/BiCMOS process through MOSIS with a design goal of 200 mW of on-chip power, a top frame rate of 1000 frames/second, and a basic accuracy of 5%. A complete experimental system which estimates the FOE in real time using real motion and image scenes is demonstrated.
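The least-squares formulation described above can be sketched in a few lines. Assuming we are given the stationary points p_i and the spatial brightness gradients g_i at them (the NumPy, off-chip setting is illustrative, not the chip's analog implementation), the tangent to the iso-brightness contour at p_i is the line g_i · (x − p_i) = 0, and normalizing each gradient makes the residual equal to the perpendicular distance from x to that line:

```python
import numpy as np

def estimate_foe(points, grads):
    """Least-squares FOE estimate from stationary points.

    At a stationary point (zero temporal brightness derivative) the FOE
    lies on the tangent to the iso-brightness contour, i.e. on the line
    g_i . (x - p_i) = 0.  With unit-norm gradients, the residual of that
    equation is the perpendicular distance from x to the tangent, so the
    minimizer of sum_i (g_i . (x - p_i))^2 / |g_i|^2 is the FOE.
    """
    g = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    b = np.sum(g * points, axis=1)            # g_i . p_i per point
    foe, *_ = np.linalg.lstsq(g, b, rcond=None)
    return foe
```

The chip solves the same normal equations implicitly: it computes the gradient of this sum on-chip and drives it to zero through a feedback loop.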

    Expansion-based passive ranging

    This paper describes a new technique of passive ranging based on the image-plane expansion experienced by every object as its distance from the sensor decreases. The technique belongs to the feature/object-based family. The motion and shape of a small window, assumed to be fully contained inside the boundaries of some object, is approximated by an affine transformation. The parameters of the transformation matrix are derived by initially comparing successive images and progressively increasing the image time separation, so as to achieve a much larger triangulation baseline than currently possible. Depth is derived directly from the expansion part of the transformation. To a first approximation, image-plane expansion is independent of image-plane location with respect to the focus of expansion (FOE) and of platform maneuvers. Thus, an expansion-based method has the potential of providing reliable range in the difficult image area around the FOE. In areas far from the FOE, the shift parameters of the affine transformation can provide more accurate depth information than the expansion alone, and can thus be used similarly to the way shifts have been used in conjunction with the Inertial Navigation Unit (INU) and Kalman filtering. However, the performance of a shift-based algorithm, when the shifts are derived from the affine transformation, would be much improved over current algorithms because the shifts, as well as the other parameters, can be obtained between widely separated images. Thus, the main advantage of this new approach is that allowing the tracked window to expand and rotate, in addition to moving laterally, enables one to correlate images over a very long time span, which in turn translates into a large spatial baseline and a proportionately higher depth accuracy.
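The expansion-to-depth relation can be sketched under simplifying assumptions (pure translation along the optical axis at known speed v, a pinhole camera, and a window tracked as pure expansion rather than the paper's full affine model): a window at depth Z re-imaged after time dt appears scaled by s = Z / (Z − v·dt), so Z = s·v·dt / (s − 1). A minimal illustration, with the scale fitted by least squares from centroid-relative point matches:

```python
import numpy as np

def depth_from_expansion(pts0, pts1, v, dt):
    """Depth from the image-plane expansion of a tracked window.

    pts0, pts1: matching image points (N x 2) in two frames, expressed
    relative to the window centroid, so the inter-frame map reduces to a
    pure scale s with pts1 ~= s * pts0.  For translation at speed v
    along the optical axis over time dt, s = Z / (Z - v*dt), hence
    Z = s * v * dt / (s - 1).
    """
    a = pts0.ravel()
    b = pts1.ravel()
    s = float(a @ b / (a @ a))      # least-squares fit of the scale s
    return s * v * dt / (s - 1.0)
```

Because s − 1 is tiny for distant objects, the paper's strategy of widening the time separation dt between compared images directly improves the conditioning of this estimate.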